In computing, a '''normal number''' is a non-zero number in a floating-point representation which is within the balanced range supported by a given floating-point format: it is a floating-point number that can be represented without leading zeros in its significand.

The magnitude of the smallest normal number in a format is given by
:''b''<sup>''emin''</sup>,
where ''b'' is the base (radix) of the format (usually 2 or 10) and ''emin'' depends on the size and layout of the format.

Similarly, the magnitude of the largest normal number in a format is given by
:''b''<sup>''emax''</sup> × (''b'' − ''b''<sup>1−''p''</sup>),
where ''p'' is the precision of the format in digits and ''emax'' is (−''emin'') + 1.

In the IEEE 754 binary and decimal formats, ''p'', ''emin'', and ''emax'' have the following values:

{| class="wikitable"
! Format !! ''p'' !! ''emin'' !! ''emax''
|-
| binary16 || 11 || −14 || +15
|-
| binary32 || 24 || −126 || +127
|-
| binary64 || 53 || −1022 || +1023
|-
| binary128 || 113 || −16382 || +16383
|-
| decimal32 || 7 || −95 || +96
|-
| decimal64 || 16 || −383 || +384
|-
| decimal128 || 34 || −6143 || +6144
|}

For example, in the smallest decimal format (decimal32), the range of positive normal numbers is 10<sup>−95</sup> through 9.999999 × 10<sup>96</sup>.

Non-zero numbers smaller in magnitude than the smallest normal number are called denormal (or subnormal) numbers. Zero is considered neither normal nor subnormal.
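As an illustration, the following C sketch (assuming an IEEE 754 binary64 <code>double</code>, as is typical) evaluates both formulas for ''b'' = 2, ''p'' = 53, ''emin'' = −1022, ''emax'' = +1023, and compares the results with the <code>DBL_MIN</code> and <code>DBL_MAX</code> constants from <code>&lt;float.h&gt;</code>. It also uses <code>fpclassify</code> to show that halving the smallest normal number yields a subnormal.

<syntaxhighlight lang="c">
#include <stdio.h>
#include <math.h>
#include <float.h>

int main(void) {
    /* binary64 parameters: b = 2, p = 53, emin = -1022, emax = +1023 */

    /* Smallest normal: b^emin = 2^-1022 */
    double smallest_normal = ldexp(1.0, -1022);

    /* Largest normal: b^emax * (b - b^(1-p)) = 2^1023 * (2 - 2^-52) */
    double largest_normal = ldexp(2.0 - ldexp(1.0, -52), 1023);

    printf("b^emin                   = %a\n", smallest_normal);
    printf("DBL_MIN                  = %a\n", DBL_MIN);
    printf("b^emax * (b - b^(1-p))   = %a\n", largest_normal);
    printf("DBL_MAX                  = %a\n", DBL_MAX);

    /* Halving the smallest normal number produces a subnormal number. */
    double sub = smallest_normal / 2.0;
    printf("subnormal? %d\n", fpclassify(sub) == FP_SUBNORMAL);

    /* Zero is classified as neither normal nor subnormal. */
    printf("zero?      %d\n", fpclassify(0.0) == FP_ZERO);
    return 0;
}
</syntaxhighlight>

On a conforming C99 implementation with IEEE 754 doubles, the two computed values print identically to <code>DBL_MIN</code> and <code>DBL_MAX</code>, confirming that these standard constants bound the normal range of the format.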
== See also ==
* Normalized number